Distributed-memory tensor completion for generalized loss functions in Python using new sparse tensor kernels

Authors

Abstract

Tensor computations are increasingly prevalent numerical techniques in data science, but pose unique challenges for high-performance implementation. We provide novel algorithms and systems infrastructure which enable efficient parallel implementation of tensor completion with generalized loss functions. Specifically, we consider alternating minimization, coordinate minimization, and a quasi-Newton (generalized Gauss-Newton) method. By extending the Cyclops library, we implement all of these methods with high-level Python syntax. To make tensor completion possible for very sparse tensors, we introduce new multi-tensor primitives, for which we provide specialized implementations. We compare these routines to pairwise contraction of sparse tensors by reduction to hypersparse matrix formats, and find that the multi-tensor kernels achieve both lower theoretical cost and lower execution time in experiments. Microbenchmarking results on the Stampede2 supercomputer demonstrate the efficiency of the new primitives and functionality. We then study performance on synthetic tensors with up to 10 billion nonzeros and on the Netflix dataset, considering both least squares and Poisson loss functions.
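The alternating-minimization approach described in the abstract can be made concrete with a small sketch. The following NumPy code is an illustration only, not the paper's Cyclops-based implementation; the function name `als_sweep` and the observed-entry representation (`vals`, `idx`) are chosen here for exposition. It performs one alternating-least-squares sweep for rank-R CP completion of an order-3 tensor, updating each factor row by a small regularized least-squares solve over the observed entries.

```python
import numpy as np

def als_sweep(vals, idx, factors, reg=1e-3):
    """One alternating-least-squares sweep for CP tensor completion.

    vals    : values of the observed tensor entries, shape (nnz,)
    idx     : integer coordinates of observed entries, shape (nnz, 3)
    factors : list of three factor matrices [A, B, C], each (n_k, R)

    Each row of each factor is updated by solving a small regularized
    least-squares problem restricted to the observed entries.
    """
    R = factors[0].shape[1]
    for mode in range(3):
        other = [m for m in range(3) if m != mode]
        A = factors[mode]
        for i in range(A.shape[0]):
            # observed entries whose `mode`-th coordinate equals i
            rows = np.nonzero(idx[:, mode] == i)[0]
            if rows.size == 0:
                continue
            # Khatri-Rao rows of the other two factors, restricted
            # to the sparsity pattern of the observations
            K = factors[other[0]][idx[rows, other[0]]] * \
                factors[other[1]][idx[rows, other[1]]]
            G = K.T @ K + reg * np.eye(R)
            A[i] = np.linalg.solve(G, K.T @ vals[rows])
    return factors
```

The Khatri-Rao products restricted to the observed sparsity pattern are roughly the kind of multi-tensor contraction that the paper's new sparse kernels target; the serial loop above spells out the same computation entry set by entry set.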


Similar articles

Probabilistic Tensor Factorization for Tensor Completion

Multi-way tensor datasets emerge naturally in a variety of domains, such as recommendation systems, bioinformatics, and retail data analysis. The data in these domains usually contains a large number of missing entries. Therefore, many applications in those domains aim at missing value prediction, which boils down to a tensor completion problem. While tensor factorization algorithms can be a po...


A New Convex Relaxation for Tensor Completion

We study the problem of learning a tensor from a set of linear measurements. A prominent methodology for this problem is based on a generalization of trace norm regularization, which has been used extensively for learning low rank matrices, to the tensor setting. In this paper, we highlight some limitations of this approach and propose an alternative convex relaxation on the Euclidean ball. We ...
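For reference, the trace-norm generalization that this abstract builds on and critiques is usually written as the overlapped nuclear norm of the tensor's mode unfoldings; the display below shows that standard baseline formulation, not the alternative relaxation the paper proposes.

```latex
% Overlapped trace-norm relaxation for completing an order-N tensor
% \mathcal{T} observed on an index set \Omega: penalize the nuclear
% norm \|\cdot\|_* of every mode-n unfolding X_{(n)}.
\min_{\mathcal{X}} \; \sum_{n=1}^{N} \lambda_n \left\| X_{(n)} \right\|_{*}
\quad \text{subject to} \quad
\mathcal{X}_{i_1 \dots i_N} = \mathcal{T}_{i_1 \dots i_N}
\ \ \text{for } (i_1, \dots, i_N) \in \Omega .
```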


Interpolation using Hankel tensor completion

We present a novel multidimensional seismic trace interpolator that works on constant-frequency slices. It performs completion on Hankel tensors whose order is twice the number of spatial dimensions. Completion is done by fitting a PARAFAC model using an Alternating Least Squares algorithm. The new interpolator runs quickly and can better handle large gaps and high sparsity than existing comple...
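As a sketch of the Hankel embedding such interpolators rely on, here is the one-spatial-dimension case, where the embedding of a constant-frequency trace is an ordinary Hankel matrix (order two, matching "twice the number of spatial dimensions"); higher-dimensional data nests this construction once per spatial axis. The helper name `hankel_embed` is illustrative, and SciPy is assumed to be available.

```python
import numpy as np
from scipy.linalg import hankel

def hankel_embed(trace):
    """Embed a constant-frequency trace of length n into an
    (m x (n - m + 1)) Hankel matrix with m ~ n/2, so that
    H[i, j] = trace[i + j] along the anti-diagonals."""
    n = len(trace)
    m = n // 2 + 1
    return hankel(trace[:m], trace[m - 1:])
```

A low-rank fit of this matrix (or, for higher orders, a PARAFAC model fit by ALS) then fills missing traces, since each spatial plane wave contributes a rank-one Hankel component.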


Tensor completion in hierarchical tensor representations

Compressed sensing extends from the recovery of sparse vectors from undersampled measurements via efficient algorithms to the recovery of matrices of low rank from incomplete information. Here we consider a further extension to the reconstruction of tensors of low multi-linear rank in recently introduced hierarchical tensor formats from a small number of measurements. Hierarchical tensors are a...
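To make "low multi-linear rank" concrete, the sketch below builds an order-3 tensor in Tucker form with `numpy.einsum`; hierarchical tensor formats refine this idea by recursively factorizing the core, which is what keeps the parameter count tractable for higher orders. The dimensions and ranks here are arbitrary illustrative values.

```python
import numpy as np

# An order-3 tensor of multilinear rank (r1, r2, r3): a small core G
# contracted with a factor matrix along each mode. Hierarchical formats
# (HT, tensor train) additionally factorize the core itself.
n, r = (20, 30, 40), (3, 4, 5)
G = np.random.randn(*r)                              # core tensor
U = [np.random.randn(n[k], r[k]) for k in range(3)]  # mode factors
X = np.einsum('abc,ia,jb,kc->ijk', G, U[0], U[1], U[2])
# X has 20*30*40 = 24000 entries but is described by only
# r1*r2*r3 + sum(n_k*r_k) = 60 + 380 = 440 parameters.
print(X.shape)
```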


Tensor Completion

The purpose of this thesis is to explore methods for solving the tensor completion problem. Inspired by the matrix completion problem, the tensor completion problem is formulated as an unconstrained nonlinear optimization problem that finds three factors giving a low-rank approximation. Various iterative methods, including gradient-based methods and the stochastic gradient descent method ...
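The stochastic gradient descent variant mentioned here has a particularly compact form: each sampled observed entry touches only one row of each of the three factors. The following sketch assumes the standard squared-loss CP completion objective and is a generic illustration, not the thesis's exact algorithm.

```python
import numpy as np

def sgd_step(A, B, C, i, j, k, t_ijk, lr=0.01, reg=1e-4):
    """One SGD update for rank-R CP completion on an observed entry
    T[i,j,k] = t_ijk with squared loss. Only three factor rows change."""
    pred = np.sum(A[i] * B[j] * C[k])       # model prediction
    e = pred - t_ijk                        # residual
    gA = e * (B[j] * C[k]) + reg * A[i]     # gradient w.r.t. row A[i]
    gB = e * (A[i] * C[k]) + reg * B[j]     # gradient w.r.t. row B[j]
    gC = e * (A[i] * B[j]) + reg * C[k]     # gradient w.r.t. row C[k]
    A[i] -= lr * gA
    B[j] -= lr * gB
    C[k] -= lr * gC
```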



Journal

Journal title: Journal of Parallel and Distributed Computing

Year: 2022

ISSN: 1096-0848, 0743-7315

DOI: https://doi.org/10.1016/j.jpdc.2022.07.005